Optimal Manpower Recruitment and Promotion Policies for the finitely graded systems using Dynamic Programming

Authors

Abstract

Related articles

Optimal manpower recruitment by stochastic programming in graded manpower systems

Human resource is one of the most effective resources in the development of any organization or nation. Optimal utilization of manpower is a pivotal challenge for the manager of a dynamic management system. Achieving objectives while meeting the constraints of various feasibilities is the core problem facing statisticians, model developers, and OR scientists. In this paper we develop a model wit...

Full text

A DSS-Based Dynamic Programming for Finding Optimal Markets Using Neural Networks and Pricing

One of the substantial challenges in marketing efforts is determining optimal markets, specifically in market segmentation. The problem is more controversial in electronic commerce and electronic marketing. Consumer behaviour is influenced by different factors and thus varies in different time periods. These dynamic impacts lead to the uncertain behaviour of consumers and therefore harden the t...

Full text

Determination of the Optimal Manpower Size Using Linear Programming

There would be no meaningful development until the manpower involved in the transformation of production facilities into useful goods and services is well trained and planned. Recent advances in mathematical programming methodology have included: development of interior methods competing with the simplex method, improved simplex codes, vastly improved performance for mixed-integer programmi...

Full text
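To make the linear-programming approach above concrete, here is a minimal sketch of sizing a two-grade workforce at minimum salary cost; the grades, costs, and service requirements are entirely hypothetical, not taken from the cited paper.

```python
from scipy.optimize import linprog

# Hypothetical model: x = (junior, senior) headcounts, minimize salary cost.
cost = [30, 50]                # salary per head, by grade

# Service constraints a @ x >= b, written as -a @ x <= -b for linprog:
A_ub = [[-1, -2],              # service capacity: junior + 2*senior >= 100
        [-1, -1]]              # minimum total headcount: junior + senior >= 60
b_ub = [-100, -60]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
junior, senior = res.x         # optimal headcounts per grade
```

The optimum sits at the vertex where both constraints bind (20 juniors, 40 seniors here), which is the standard behaviour of an LP with a linear cost over a polyhedral feasible set.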

Convergence of Sample Path Optimal Policies for Stochastic Dynamic Programming

We consider the solution of stochastic dynamic programs using sample path estimates. Applying the theory of large deviations, we derive probability error bounds associated with the convergence of the estimated optimal policy to the true optimal policy, for finite horizon problems. These bounds decay at an exponential rate, in contrast with the usual canonical (inverse) square root rate associat...

Full text
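The sample-path idea in the abstract above can be illustrated by replacing the expectation in a one-stage decision problem with a Monte Carlo average: the action minimizing the sample-average cost converges to the true optimal action as the sample size grows. The cost model below is a toy illustration, not the paper's setup.

```python
import random

random.seed(0)

def stage_cost(u, w):
    """Hypothetical one-stage cost g(u, w): quadratic in u with noise w."""
    return (u - 2.0) ** 2 + w * u

def estimated_cost(u, n_samples=10_000):
    """Sample-average approximation of E[g(u, w)], w ~ Uniform(-1, 1)."""
    return sum(stage_cost(u, random.uniform(-1, 1))
               for _ in range(n_samples)) / n_samples

actions = [0, 1, 2, 3]
best = min(actions, key=estimated_cost)   # sample-path optimal action
```

Since E[w] = 0, the true expected cost is (u - 2)^2, so the estimated optimal action coincides with the true optimum u = 2 once the sampling error is small relative to the cost gaps between actions.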

Regular Policies in Stochastic Optimal Control and Abstract Dynamic Programming

Notation and connection with abstract DP. Mapping of a stationary policy μ: for any control function μ with μ(x) ∈ U(x) for all x, and any J ∈ E(X), define the mapping Tμ : E(X) → E(X) by (TμJ)(x) = E{g(x, μ(x), w) + αJ(f(x, μ(x), w))}, x ∈ X. Value iteration mapping: for any J ∈ E(X), define the mapping T : E(X) → E(X) by (TJ)(x) = inf_{u ∈ U(x)} E{...

Full text
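The value-iteration mapping T quoted above can be sketched on a small finite-state problem: (TJ)(x) = min over u of E{g(x, u, w) + αJ(f(x, u, w))}, iterated to its fixed point. The states, costs, and transition probabilities below are invented for illustration.

```python
import numpy as np

n_states, n_actions = 3, 2
alpha = 0.9  # discount factor (the α of the abstract)

# g[x, u]: expected one-stage cost; P[u, x, y]: P(next state y | state x, action u)
g = np.array([[1.0, 2.0], [0.5, 1.5], [2.0, 0.2]])
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.6, 0.3], [0.0, 0.3, 0.7]],  # action 0
    [[0.5, 0.5, 0.0], [0.2, 0.2, 0.6], [0.3, 0.0, 0.7]],  # action 1
])

def T(J):
    """Bellman operator: (TJ)(x) = min_u [ g(x,u) + alpha * E J(next state) ]."""
    Q = g + alpha * np.einsum('uxy,y->xu', P, J)  # Q[x, u]
    return Q.min(axis=1)

J = np.zeros(n_states)
for _ in range(500):       # T is an alpha-contraction, so this converges
    J = T(J)
policy = (g + alpha * np.einsum('uxy,y->xu', P, J)).argmin(axis=1)
```

Because T is a contraction with modulus α, the iterates converge geometrically to the unique fixed point J* = TJ*, and the greedy policy extracted at the end is optimal for this toy problem.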


Journal

Journal title: Heliyon

Year: 2021

ISSN: 2405-8440

DOI: 10.1016/j.heliyon.2021.e07424